Democratic Underground

1.1 for Elections. TIA, Even "Plainer" English. Law of Large Numbers, etc.

This topic is archived.
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 04:03 PM
Original message
1.1 for Elections. TIA, Even "Plainer" English. Law of Large Numbers, etc.
Edited on Wed Dec-14-05 04:04 PM by autorank

The Law of Large Numbers & Central Limit Theorem:
A Polling Simulation


TruthIsAll


WHO SHOULD READ THIS?

It's for everyone who voted in 2004 or plans to vote in 2006.

It's for those who say: "Math was my worst subject in high school".
If you've ever placed a bet at the casino or race track,
or played the lottery, you already know the basics.
It's about probability.
It's about common sense.
It's not all that complicated.

It's for individuals who have taken algebra, probability and
statistics and want to see how they apply to election polling.

It's for graduates with degrees in mathematics or political science,
an MBA, etc., who may or may not be familiar with simulation concepts.

It's for Excel spreadsheet users who enjoy creating math models.
Simulation is a powerful tool for analyzing uncertainty.
Like coin flipping and election polling.

It's for writers, bloggers and politicians who seek the truth:
Robert Koehler, Brad from BradBlog, John Conyers, Barbara Boxer,
Mark Miller, Fitrakis, Wasserman, USCV, Dopp, Freeman, Baiman, Simon,
Scoop's althecat, Krugman, Keith Olbermann, Mike Malloy, Randi Rhodes,
Stephanie Miller, etc.

It's for Netizens who frequent Discussion Forums.

It's for those in the Media who are still waiting for editor approval
to discuss documented incidents of vote spoilage, vote switching and
vote suppression in recent elections, incidents which are confirmed by
impossible pre-election and exit poll deviations from the recorded vote.

It's for naysayers who promote faith-based hypotheticals in their
unrelenting attempts to debunk the accuracy of the pre-election
and exit polls.

People forget Selection 2000. Gore won the popular vote by 540,000.
But Bush won the election by a single vote.
SCOTUS voted along party lines: Bush 5, Gore 4.
That stopped the Florida recount in its tracks.
Gore won Florida. Why did they do it?
And why did the "liberal" media say he lost?

But Gore voters did not forget 2000.
So in 2004, they came out to vote in droves.
Yet the naysayers claim Gore voters forgot that they voted for him
and told the exit pollsters that they voted for Bush in 2000.
It's the famous "false recall" hypothetical.
The naysayers were forced to use it when they could not come up
with a plausible explanation for the impossible weightings of
Bush and Gore voter turnout in the Final National Exit poll.

Put on the defoggers.
We've had enough disinformation.
We've had enough obfuscation.
Now we will let the sunshine in.

This is a review of the basics.

________________________________________________________________________

A COIN-FLIP EXPERIMENT

Consider an experiment:
Flip a fair coin 10 times.
Calculate the percentage of heads.
Write it down.

Increase it to 30.
Calculate the new total percentage.
Write it down.

Keep increasing the number of flips...
Write down the percentage for 50.
Then do it for 80.
Stop at 100.
That's our final coin flip sample-size.

When you're all done, check the percentages.
Is the sequence converging to 50%?
That’s the true population mean (average).

That's the Law of Large Numbers.

The coin flip is easily simulated in Excel.
Likewise, in the polling simulations which follow,
we will analyze the results of polling experiments
over a range of trials (sample sizes).
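For readers without Excel handy, the coin-flip experiment above can be sketched in a few lines of Python. This is a rough stand-in for the spreadsheet, not part of the original model; the seed and the extra large sample sizes are arbitrary choices added to make the convergence visible.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def heads_pct(flips):
    """Flip a fair coin `flips` times and return the percentage of heads."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return 100.0 * heads / flips

# The experiment described above: keep increasing the number of flips
# and watch the percentage settle toward the true population mean (50%).
for n in (10, 30, 50, 80, 100, 1000, 100000):
    print(f"{n:>6} flips: {heads_pct(n):.1f}% heads")
```

The small samples bounce around; the large ones hug 50%. That is the Law of Large Numbers in action.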

_____________________________________________________

THE POLLING CONTROVERSY

Naysayers have a problem with polls.
Especially when a Bush is running.
Regardless of how many polls or how large the samples,
the results are never good enough for them.
They prefer to cite their two famous, unproven hypotheticals:
Bush non-responders (rBr) and Gore voter memory lapse ("false recall").

How do pollsters handle non-responders?
Simple.
They just... increase the sample-size!
Furthermore, statistical studies indicate that there is no
discernible correlation between non-response rates and survey results.

How do pollsters handle false recall?
Simple.
They know that in a large sample, forgetfulness on the part
of Gore and Bush voters... will cancel each other out!
There's no evidence that Gore voters forget any more than Bush voters.
On the contrary.
If someone you knew robbed you in broad daylight,
would you forget who it was four years later?
Gore was robbed in 2000.

They claim that polling bias favored Kerry
in BOTH the pre-election AND exit polls.
They offer no evidence to back up these claims.
In fact, National Exit Poll data shows a pro-Bush bias.

They maintain that the polls are not random-samples.
Especially when Bush is involved.

_____________________________________________________

THE MARGIN OF ERROR (MOE)

Naysayers ignore the fact that each poll has a Margin of Error (MoE).
Are we to ignore the MoE provided by a professional pollster?

The MoE is the interval on either side of the Polling Sample mean
in which there is a 95% confidence level (probability) of containing
the TRUE Population Mean.

Here is an example:
Assume a poll with a 2% MoE and Kerry is leading Bush by 52-48%.
Then there is a 95% probability that Kerry's TRUE vote is in the range
from 50% to 54% {52-MoE, 52+MoE}.

Furthermore, the probability is 97.5% that Kerry's vote will exceed 50%.

Here is the standard formula that ALL pollsters use to calculate MoE:

MoE = 1.96 * sqrt(p*(1-p)/n) * (1+CF)
where
n is the sample size.
p and 1-p are the 2-party vote shares.
CF is an exit poll "cluster effect" factor (see the example below).

The MoE decreases as the sample-size (n) increases.
The poll becomes more accurate as we take more samples.
It's the Law of Large Numbers again.
Makes sense, right?
Remember the coin flips?

This result is not so obvious.
For a given sample size (n), the MoE is at its maximum value
when p = .50 (the two candidates are tied).
To put it another way:
The more one-sided the poll, the smaller the MoE.
In the 50/50 case, the formula can be simplified:
MoE = 1.96 * .5/sqrt(n) = .98/sqrt(n)

Let's calculate the MoE for the 12:22am National Exit poll.
n = 13047 sampled respondents
p = Kerry's true 2-party vote share = .515
1-p = Bush's vote share = .485

MoE = 1.96 * sqrt(.515*.485/13047) = .0086 = 0.86%
Adding a 30% exit poll cluster effect:
MoE = 1.30*0.86% = 1.12%
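The MoE arithmetic above is easy to check with a short Python function. The function name and the cluster-factor parameter are illustrative conveniences, not from any polling package; the formula is the standard one given earlier.

```python
from math import sqrt

def margin_of_error(p, n, cluster=0.0):
    """95% MoE for a share p with sample size n, inflated by a cluster factor."""
    return 1.96 * sqrt(p * (1 - p) / n) * (1 + cluster)

# The 12:22am National Exit Poll example from the text:
simple = margin_of_error(0.515, 13047)           # about 0.0086 (0.86%)
clustered = margin_of_error(0.515, 13047, 0.30)  # about 0.0111 (the text rounds to 1.12%)
print(f"MoE: {simple:.4f}, with 30% cluster effect: {clustered:.4f}")
```

Doubling the sample size shrinks the MoE by a factor of sqrt(2), which is the Law of Large Numbers showing up in the formula.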

The cluster effect is highly controversial.
We can only make a rough estimate of its impact on MoE.
The higher the cluster effect, the larger the MoE.
But cluster is only a factor in exit polls.
There is no MoE adjustment in pre-election or approval polls.

Why would a polling firm include the MoE if the poll was
not designed to be an effective random sample?

Pollsters use proven methodologies, such as cluster sampling,
stratified sampling, etc. to attain a near-perfect random sample.
________________________________________________________________

THE MATHEMATICAL FOUNDATION

This model demonstrates the Law of Large Numbers (LLN).
LLN is the foundation and bedrock of statistical analysis.
The model illustrates LLN through a simulation of polling samples.

In a statistical context, LLN states that the mean (average) of a
random sample taken from a large population is likely
to be very close to the (true) mean of the population.

Start of math jargon alert...
In probability theory, several laws of large numbers say that
the mean (average) of a sequence of random variables with
a common distribution converges to their common mean as
the size of the sequence approaches infinity.

The Central Limit Theorem (CLT) is another famous result:
The sample means (averages) of an independent series of
random samples (i.e. polls) taken from the same population
will tend to be normally distributed (form the bell curve)
as the number of samples increases.
This holds for ALL practical statistical distributions.
End of math jargon alert....
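The CLT claim can be illustrated with a quick simulation, sketched here in Python as a stand-in for the Excel model. The poll size and the number of polls are arbitrary choices for the illustration.

```python
import random
from statistics import mean, stdev
from math import sqrt

random.seed(1)
p, n, polls = 0.515, 1000, 2000  # true share, poll size, number of polls

# Each poll's mean is the fraction of n simulated respondents choosing Kerry.
means = [sum(random.random() < p for _ in range(n)) / n for _ in range(polls)]

# CLT: the poll means cluster around p with spread sqrt(p*(1-p)/n).
print(f"mean of poll means: {mean(means):.4f} (true: {p})")
print(f"spread: {stdev(means):.4f} (theory: {sqrt(p*(1-p)/n):.4f})")
```

A histogram of `means` would show the bell curve forming around the true population mean.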

It's really not all that complicated.
The naysayers never consider LLN or CLT.
They would have us believe that professional pollsters are
incapable of creating accurate surveys (i.e. effectively random
samples) through systematic, clustered or stratified sampling.
Especially when a Bush is running.

LLN and CLT say nothing about bias.

__________________________________________________________________

USING RANDOM NUMBERS TO SIMULATE A SEQUENCE OF POLLS

Random number simulation is the best way to illustrate LLN.
These are the steps:
1) Assume a true 2-party vote percentage for Kerry (e.g., 51.5%).
2) Simulate a series of 8 polls of varying sample size.
3) Calculate the sample mean vote share and win probability for each poll.
4) Confirm LLN by noting that as the poll sample size increases,
the sample mean (average) converges to the population mean ("true" vote).

It's just like flipping a coin.
Let Kerry be HEADS, with a 51.5% chance of winning a random voter.
This is Kerry's TRUE vote (the population mean).
Bush is TAILS with a 48.5% chance.

A random number (RN) between zero and one is generated for each respondent.
If RN is LESS than Kerry's TRUE share, the vote goes to Kerry.
If RN is GREATER than Kerry's TRUE share, the vote goes to Bush.

For example, assume Kerry's TRUE 51.5% vote share (.515).
If RN = .51, Kerry's poll count is increased by one.
If RN = .53, Bush's poll count is increased by one.

The sum of Kerry's votes is divided by the poll sample size (e.g., 13047).
This is Kerry's simulated 2-party vote share.
It approaches his TRUE 51.50% vote share as the sample size increases.
Once again, LLN applies, just as it did in the coin-flip experiment.
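The steps above can be sketched directly in Python (again, a rough stand-in for the Excel model; the seed is arbitrary and the sample sizes are the ones mentioned elsewhere in this post):

```python
import random

random.seed(7)
TRUE_SHARE = 0.515  # Kerry's assumed true 2-party share (the population mean)

def simulate_poll(n):
    """One simulated poll: each respondent is a uniform random draw.
    RN < TRUE_SHARE counts for Kerry; otherwise the vote goes to Bush."""
    kerry = sum(random.random() < TRUE_SHARE for _ in range(n))
    return kerry / n

# Step 2: a series of polls of increasing sample size.
for n in (600, 1000, 1963, 2846, 13047):
    share = simulate_poll(n)
    print(f"n={n:>5}: Kerry sample share = {share:.4f}")
```

The larger polls land closer to .5150, exactly as the LLN predicts.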

________________________________________________________________


SIMULATION GRAPHICS

These graphs are a visual summary of the simulation.





________________________________________________________________

RUNNING THE SIMULATION

Press F9 to run the simulation.
Watch the numbers and graphs change.
They should NOT change significantly.

The graphs illustrate polling simulation output for:
Kerry's 2-party vote (true population mean): 51.50%

Exit Poll Cluster effect (zero for pre-election): 30%
The exit poll "cluster effect" is the incremental adjustment
to the margin of error in order to account for the clustering
of individuals with similar demographics at the exit polling site.

Play what-if:
Lower Kerry's 2-party vote share from 51.5% to 50.5%.
Press F9 to run the simulation.
Kerry's poll shares, corresponding win probabilities and
minimal threshold vote (97.5% confidence level), all DECLINE,
reflecting the lowering of his "true vote".

________________________________________________________________

POLLING SAMPLE-SIZE

Just like in the above coin-flipping example, the
Law of Large Numbers takes effect as poll sample-size increases.

That's why the National Exit Poll was designed to
survey at least 13000 respondents.

Note the increasing sequence of polling sample size as we go
from the pre-election state (600) and national (1000) polls
to the state and National exit polls:
Ohio (1963), Florida (2846) and the National (13047).

Here is the National Exit Poll Timeline:
Updated ; respondents ; vote share
3:59pm: 8349 ; Kerry led 51-48
7:33pm: 11027 ; Kerry led 51-48
12:22am: 13047 ; Kerry led 51-48

1:25pm: 13660 ; Bush led 51-48
The final was matched to the vote.
So much for letting LLN and CLT do their magic.
Especially when a Bush is running.

________________________________________________________________

CALCULATING PROBABILITIES

The Kerry win probabilities are the main focus of the simulation.
They closely match theoretical probabilities obtained from
the Excel Normal Distribution function.

The probabilities are calculated using two methods:
1) running the simulation and counting Kerry's total polling votes.
2) calculating the Excel Normal Distribution function:
Prob = NORMDIST(PollPct, 0.50, MoE/1.96, true)

The simulation shows that given Kerry's 3% lead in the 2-party vote
(12:22am National Exit Poll), his popular vote win probability
was nearly 100%. And that assumes a 30% exit poll cluster effect!

For a 2% lead (51-49), the win probability is 97.5% (still very high).
For a 1% lead (50.5-49.5), it's 81% (4 out of 5).
For a 50/50 tie, it's 50%. Even money. Makes sense, right?
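Excel's NORMDIST has a direct counterpart in Python's standard library, `statistics.NormalDist`. The sketch below uses the 1.12% MoE computed earlier; with that input it roughly reproduces the win probabilities quoted above (the function name is an illustrative choice).

```python
from statistics import NormalDist

def win_probability(poll_pct, moe):
    """P(candidate's true share exceeds 50%), mirroring the formula above:
    equivalent to Excel's NORMDIST(poll_pct, 0.50, moe/1.96, TRUE)."""
    return NormalDist(mu=0.50, sigma=moe / 1.96).cdf(poll_pct)

# Using the 1.12% MoE from the National Exit Poll example:
for lead_pct, share in ((3, 0.515), (2, 0.510), (1, 0.505), (0, 0.500)):
    print(f"{lead_pct}% lead: win probability {win_probability(share, 0.0112):.3f}")
```

A 1% lead gives about 81% and a tie gives exactly 50%, matching the figures in the text; the 3% lead comes out very close to 1.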

The following probabilities are also calculated for each poll:
1) The 97.5% confidence level for Kerry's vote share.
There is a 97.5% probability that Kerry's true vote will be greater.
The minimum vote share increases as the sample size grows.

2) The probability of Bush achieving his recorded two-party vote (51.24%).
The probability is extremely low that Bush's actual vote would deviate
from his true 48.5% two-party share.
The probability declines as the sample size grows.

________________________________________________________________

DOWNLOADING THE EXCEL MODEL

Wait one minute for the Excel model download.
It's easy.
Just two inputs -
Kerry's 2-party true vote share (51.5%) and
exit poll cluster effect (set to 30%).

Press F9 to run the simulation.

http://us.share.geocities.com/electionmodel/MonteCarloPollingSimulation.xls

Or go here for a complete listing of threads from
TruthIsAll: www.TruthIsAll.net

________________________________________________________________






Angry Girl Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 04:28 PM
Response to Original message
1. For ALL of you who ever told me the elections weren't fixed...
Edited on Wed Dec-14-05 04:31 PM by Angry Girl
Do your bloody homework for a change and READ what AutoRank has to say!

I'm SO SICK of arguing with ignorant DUers who can't be bothered to work a little to find the truth. Just because you were lazy or incurious enough to have neglected math in school shouldn't mean I have to suffer for it. Because that's exactly what's happened.

I don't care if you spend all your money on lottery tickets because you don't understand statistics, but DON'T make your ignorance have the terrible consequence of keeping this totalitarian dictator in office! Which is exactly what's been happening.

Thanks, Autorank, for never giving up. knr

p.s. Anybody who gives me a Bayesian argument against this gets put on Ignore.
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 04:38 PM
Response to Reply #1
3. for the record, I've been working for over a year. n/t
Angry Girl Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 04:43 PM
Response to Reply #3
4. Huh? I must have missed something...
But I'm easily confused. What's up with the working thing?
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 05:12 PM
Response to Reply #4
6. sorry, I was rushing!
I was reacting to your, "I'm SO SICK of arguing with ignorant DUers who can't be bothered to work a little to find the truth." There are plenty of folks who dismiss the statistical arguments for fraud without having considered them -- and there are plenty who accept them too readily, too. That's all.

I explained just below why I don't find TIA's post very helpful, although it's probably perfectly fine as an introduction to sampling theory. I think election fraud is a real concern, and I think there is little evidence that it was decisive in 2004. (I think there is plenty of evidence that some voters got messed over in 2004, which is unacceptable regardless.) So, you start to get a sense of why I chose OnTheOtherHand as my screen name.... Angry is good too: there is plenty to be angry about.
Angry Girl Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 07:38 PM
Response to Reply #6
13. Ahhh! I'm unemployed so I thought maybe congrats were in order...
You're right, though. I was just having a very frustrated moment. But I do think that the work done by AutoRank et al. is worthy of being looked at. I wish this election thing were a two-second sound bite but it isn't.... *sigh* Thanks for being the "other hand"! Balance and sanity are good things, in moderation, too! :-)
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 08:58 PM
Response to Reply #1
14. Thank you Angry Girl, stay that way!
:yourock:
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 04:35 PM
Response to Original message
2. demonstrates why TIA is irrelevant to the debate

How do pollsters handle non-responders?
Simple.
They just... increase the sample-size!

If a poll is biased, then increasing the sample size will not help. If anything, it will just give a false sense of security to people who think Margin of Error is all that matters. The 1936 Literary Digest poll got over 2 million responses, and it projected that Alf Landon would get 57% of the vote.


How do pollsters handle false recall?
Simple.
They know that in a large sample, forgetfulness on the part
of Gore and Bush voters... will cancel each other out!

Second verse, same as the first, a little bit longer and a little bit worse.

A handy example of apparent false recall: in the 2002 General Social Survey, 885 respondents recalled having voted for Bush and 781 recalled having voted for Gore. That gives Bush 53.1% of the two-party vote out of these 1666 respondents. Using TIA's handy MoE formula, we can figure that the margin of error for this percentage is about 2.4% -- so this 2002 survey conclusively demonstrates...
  • either that Gore stole the popular vote in 2000 (I'm thinking this is very unlikely)

  • or that the sample was biased toward Republicans (but self-identified Democrats substantially outnumber Republicans in this sample, so I wouldn't put money on that if I were you)

  • or that some Gore voters wrongly said that they voted for Bush (aka "false recall"), which is what I think, since it seems to happen after almost every election

  • or some combination of these things, and maybe some odds and ends (poll fraud, anyone?)
Guess what. If false recall doesn't cancel out in 1666 respondents, it isn't going to cancel out in 10,000 respondents, or 100,000 respondents, or 2 million respondents.

I am So. Very. Sick. of apparently intelligent people claiming that Really Big Samples somehow make all other polling problems go away. That isn't the Law of Large Numbers: that is wishful thinking.
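The GSS arithmetic in the reply above is easy to verify with a few lines of Python (a minimal check using the figures quoted in the post and the MoE formula from the original message):

```python
from math import sqrt

# The 2002 GSS recall figures quoted above.
bush, gore = 885, 781
n = bush + gore          # 1666 respondents
share = bush / n         # Bush's recalled two-party share

# TIA's MoE formula applied to this sample:
moe = 1.96 * sqrt(share * (1 - share) / n)

print(f"Bush recalled share: {share:.3f} of {n}")  # about 0.531
print(f"Margin of error: {moe:.3f}")               # about 0.024
```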
rock Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 04:44 PM
Response to Original message
5. The Law of Large Numbers
As the number of trials increases, the percentage gets closer to the expected value while, surprisingly, the absolute error between the actual and theoretical counts gets worse.
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 05:18 PM
Response to Reply #5
7. on average, yeah
So if you flip a coin 10 times, most of the time you will get 5 heads plus-or-minus 2 -- but if you flip it one million times and get 500,000 plus-or-minus 2, that is actually kind of suspicious. Statistics is cool.
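The point in the two replies above is easy to demonstrate: the heads *percentage* converges to 50%, but the raw count's deviation from n/2 typically grows like sqrt(n)/2. A quick sketch (seed and sample sizes arbitrary):

```python
import random
from math import sqrt

random.seed(3)

# Percentage error shrinks; absolute count error grows like sqrt(n)/2.
for n in (100, 10000, 1000000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n={n:>7}: pct={100 * heads / n:6.2f}%  "
          f"|heads - n/2| = {abs(heads - n / 2):.0f}  (typical: {sqrt(n) / 2:.0f})")
```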
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 05:20 PM
Response to Original message
8. Wow, Version 1.1 must be really good!
The flack attack has started early. Must be a little "red light alert" that goes off.

This is not mandatory reading, btw, but it must be compelling stuff given all the comments:rofl:

Attention is the highest form of compliment for controversial positions. Although this position is not controversial, it's just made to seem that way.

We all know what happened in 2004, well most of us, and it wasn't pretty.
We also know there's no point in enabling a bully, *, by attacking his critics.

I like this, "plain English" demystifies the arcane and makes it available to the public.

I feel like Gutenberg in the morning, it feels like victory.
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 05:23 PM
Response to Reply #8
9. Lest we forget our history!
http://en.wikipedia.org/wiki/Literary_Digest

In November, Landon carried only Vermont in addition to the Maine electoral votes that he had already garnered; President Franklin Delano Roosevelt carried the then-46 other states; Landon's electoral vote total of eight is a tie for the record low for a major-party nominee since the current U.S. two-party system began in the 1850s. The magazine was completely discredited due to the poll and was soon discontinued.

In retrospect, the polling techniques employed by the magazine were to blame. It had surveyed firstly its own readers, a group with disposable incomes well above the national average of the time (shown in part by their ability still to afford a magazine subscription during the depths of the Great Depression). This base was supplemented by two other readily-available lists, that of registered automobile owners and that of telephone users. While such lists might come close to providing a statistically-accurate cross-section of Americans currently, this assumption was manifestly untrue in the 1930s. Both groups had income well above the national average of the day, and resulted in lists of voters far more likely to support Republicans than a truly typical voter of the time.

This debacle led to a considerable refinement of public opinion polling techniques and was largely regarded as spurring the beginning of the era of modern scientific public opinion research.
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 06:26 PM
Response to Reply #9
12. that might be why NEP was off by 3 points instead of 20 or so
Just a thought.

I can't help it, I just roll my eyes -- there are the people who say that the polls provide clear evidence of fraud, and then there are the actual researchers, whose opinions seem generally to range from "Probably not" through "Hell, no!" to "What, are you stupid?" And then there are the cockeyed optimists like me who try to get the two to communicate with each other.

Hey, carry on. I hear that constant repetition can work wonders.
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 09:06 PM
Response to Reply #12
15. Just for the record...
You're on what I call virtual ignore. As you may have noticed, or not, I don't ever respond to your posts. I'm not going to use the ignore feature, that's too strange, but, other than this missive to let you know, I will NEVER have a dialog with you.

As a parting gesture of good will, I've noticed that this thread and others make you "sick" (OTOH: "I am So. Very. Sick...") and "roll your eyes."

These threads are not mandatory for you or anyone else. If they seem repetitive, there are a lot of people out there who come here and other places only on occasion, hence the recursive nature of our work. Nevertheless, what is offered is generally new or upgraded information. Finding yourself coming back to what you call the same old thing over and over again probably means you might want to avoid that "constant repetition," as you call it, and seek a healthier alternative. It's that time of year.

Just for the record...:hi: and so long, forever.
Cocoa Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 11:38 PM
Response to Reply #15
16. why?
that's what you left out. The reason you won't be corresponding with OTOH.
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 04:27 AM
Response to Reply #15
20. autorank, you don't ignore -
You post these things from TIA, many of which (in their original versions at least) call on OTOH and me by name, or if not, by "naysayers". If you were "virtually ignoring" us, you'd ignore TIA's posts, which are certainly not ignoring us.

So to post these things, then express either glee or irritation when either of us comes out to respond, is having your cake and eating it.

TIA's post, far from ignoring me, appears to be a response to my insistence that his analyses assume RANDOM SAMPLING, which would make "sampling error" (the error associated with random sampling) the only source of error. In fact there are others. TIA has mentioned two: non-response bias and false recall. However, they cannot be dismissed as TIA does.

TIA asserts, without reference, that non-response bias can be dealt with by increasing sample size, which is simply not true. Increasing sample size will simply reduce the "sampling error", which will actually tend to make any bias more "statistically significant". I have been trying to explain this to people, including TIA, for months, as has OTOH. TIA furthermore asserts that "statistical studies indicate that there is no discernible correlation between non-response rates and survey results". He does not reference these studies, but it is certainly true that bias is not necessarily directly related to the degree of non-response - rather it is a function of the degree to which the non-responding population differs from the responding population. Even a low rate of non-response will introduce a large degree of bias if the non-responding population is characterised by a feature you are interested in (like how they voted). See below.

Similarly, TIA asserts, without reference, that "false recall" will "cancel...out". It will if it does. It won't if it doesn't. And again, if it doesn't, the larger the sample, the more "statistically significant" will be the resulting bias.

TIA gives an excellent introduction to the nature of sampling error, and the way in which it reduces as sample size increases.

But he is plain wrong when he asserts that other kinds of error also decrease with large numbers. The reverse is the case. They actually become more "significant".

Ignore this if you like. But if you are going to channel TIA's statistics class (some of which is good and useful), then it ought not to be allowed to stand without these important corrigenda.

The WHOLE POINT of my case that we have to consider "non-sampling" error in polls is that non-sampling error is NOT eliminated by large sample sizes. If your post from TIA is stats 101, what I have just written is stats 102. Actually it should be the first sentence of stats 101:

"Assuming random sampling...."

And while I'm here, I noticed a while back that TIA cited a webpage that advised that if you expect non-response, you should increase your sample size. This is simply because if you calculate that to get the MoE you want you need a sample of 100 respondents, and you expect only 50% response, you will need to attempt to recruit 200 respondents. This will simply ensure that your MoE is not too large. It will do nothing for any non-response bias.

There are entire books written about non-response bias. The reason for this is because it happens. Here is a quotation from one of them:

Furthermore the non-response rate may not relate to the extent of the error that is engendered by the non-response. The resulting loss of sample size will of course inflate the variance of the estimators, but the degree of bias will depend on how typical (or atypical) is the non-responding group of the population as a whole.


from: Sample Survey: Principles and Methods by Vic Barnett, OUP. Page 161. My bold.

And the take-home message is: the larger the survey, the smaller the sampling error (and the smaller the "MoE") - but the smaller the MoE, the more significant will be any bias in the survey. The exit poll discrepancy was massively significant, not because it was especially large (although it was larger than usual), but because the MoE was so small. So we know it was a real discrepancy. What we want to know is whether the bias was in the poll or the count. It could have been either. I happen to think that the balance of the evidence suggests it was in the poll, or most of it. I could be wrong. But simply asserting that it could not have been in the poll because the MoE was so small displays lack of knowledge about the kinds of errors to which surveys are subject.

I commend Vic Barnett's book to interested readers, or any other text-book on survey research.
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:17 AM
Response to Reply #20
24. febble: "TIA ..an excellent introduction to the nature of sampling error"
Can we put that on the jacket?

I did, I do, adieu.

Relax, it's just math to you. To us it's a dictatorship. That's why we get upset when they steal elections.

Happy Festivus!!! and NAMASTE
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:30 AM
Response to Reply #24
26. Actually
it matters to all of us. America is a powerful country.

NAMASTE
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:48 AM
Response to Reply #24
27. And if you want something for the jacket
you are welcome to this:

TIA gives an excellent introduction to the nature of sampling error, and the way in which it reduces as sample size increases.

But he is plain wrong when he asserts that other kinds of error also decrease with large numbers. The reverse is the case. They actually become more "significant".


Cheers

Lizzie
Chi Donating Member (921 posts) Send PM | Profile | Ignore Thu Dec-15-05 03:49 PM
Response to Reply #20
35. Not sure why you said this, this way....
"Increasing sample size will simply reduce the "sampling error", which will actually tend to make any bias more "statistically significant"."

You give the "impression" that the bias actually increases; that's not what you are saying, is it?

The only reason it's more "statistically significant" is that it now is a larger portion of the total error.
It's still the same proportion of error, compared to the total survey.
(a 3% bias remains a 3% bias, even after the sampling error is improved.)
Do I have this right?
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:46 PM
Response to Reply #35
38. Yes, that is right
and it was certainly not my intention to mislead.

Statistics is all about signal to noise. As the noise reduces, the signal will become clearer. If that signal is bias, a smaller MoE will mean there is more chance of any bias being statistically detectable.

So a huge exit poll sample will give you enough statistical power to detect quite small amounts of bias. Unfortunately what it won't tell you is whether the bias is in the poll or the count. Bias in the poll is just as possible in a large sample as in a small sample - but you might not detect it, statistically, in a smaller sample, because it may be lost in the MoE - lost in the noise.

Which is not to say that the exit poll discrepancy wasn't large - it was. And of course it was massively significant, statistically, because the sample was so large.

But what we can't say is that BECAUSE it was statistically significant it can't have been due to bias in the poll. Maybe it wasn't. But maybe it was. All I'm saying is that the fact that it was statistically significant doesn't tell us whether the bias was in the poll or in the count.

And to be really clear: a 3% bias in a small sample could still be bias - but if the MoE was 4%, you wouldn't know. However, a 3% bias in a large sample where the MoE was, say, 1%, would be significant - so you'd know it was really bias.

But you still wouldn't know whether the bias was in the poll or the count. Correlational analysis doesn't establish causality. And we know that random sampling is actually difficult - bias is a perennial problem for surveys. So there is, sadly, no a priori reason for assuming that a survey sample is random. They often aren't.
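Febble's signal-to-noise point can be checked with a quick simulation. This is a sketch with invented numbers (the 3% bias, the 50/50 race, the sample sizes and the function name are all mine, just for illustration): the same constant bias hides inside the MoE of a small poll but sticks out of the MoE of a large one.

```python
import random

random.seed(42)

def run_poll(n, true_share=0.50, bias=0.03):
    # Each respondent reports candidate A with probability
    # true_share + bias, so the poll overstates A by a constant 3%.
    hits = sum(random.random() < true_share + bias for _ in range(n))
    share = hits / n
    moe = 1.96 * (share * (1 - share) / n) ** 0.5  # approximate 95% MoE
    return share, moe

for n in (500, 10_000):
    share, moe = run_poll(n)
    print(f"n={n:6d}: poll says {share:.3f}, MoE = {moe:.3f}, "
          f"bias statistically detectable: {abs(share - 0.50) > moe}")
```

With n = 500 the MoE is around four points, so the 3% bias is usually lost in the noise; with n = 10,000 the MoE shrinks below one point and the same bias becomes "significant". Nothing in the output says whether that bias sits in the poll or in the count.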

Chi Donating Member (921 posts) Send PM | Profile | Ignore Thu Dec-15-05 08:56 PM
Response to Reply #38
41. OK, still making sure I get this
If there were 2 parallel exit polls done (one having 10k samples and the other having 50k samples), and we found that the
discrepancy (between poll and count) increased in the larger study, could we conclude that bias (in the poll or the count) is the logical explanation?
(Considering all other factors that contribute to discrepancy go down with larger samples.)


Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 03:36 AM
Response to Reply #41
46. Well
Edited on Fri Dec-16-05 04:16 AM by Febble
the short answer to your question is, any difference in magnitude between the discrepancies in the two studies wouldn't tell you whether the problem was in the poll or the count.

If the actual bias in the two studies were the same (or the actual amount of fraud in the two studies were the same), there should not be a "significant" difference between the amount of bias detected in each study, even if both studies were large enough to detect it (and both your examples are large studies), and even if the measured magnitude of the bias in each study were slightly different. In other words, if your small study had a "significant" bias of 3% and your large study had a "significant" bias of 4%, provided there were no real difference in the bias, these two values (3% and 4%) shouldn't be "significantly" different from each other. What your significance test tells you is that both 3% and 4% are "significantly" different from 0%.

But you raise an interesting point, one that has some apparently paradoxical implications. For example, the tests* we use tell us whether the discrepancy we observe is "significantly" different from zero - not whether its actual value is correct. So if study A has the statistical power to detect a discrepancy greater than 5% but study B has the power to detect a discrepancy of 2%, you might find that study A showed a "non-significant" discrepancy of 4%, while study B showed a "significant" discrepancy of 3%. But to make matters worse, you would probably also find that the discrepancy in study A was not "significantly" different from the discrepancy in study B. And worse still, it might be that the "true" discrepancy is 2%. All we would know (from study B) is that it isn't zero.

Another issue is that as well as bias in the polls, there may be errors that are not particularly biased one way or the other - but are nonetheless not simply the result of "sampling error". For example, it seems clear from what we know that the "absolute" values of the precinct-level discrepancies are higher than you'd expect from sampling error. In other words, there is noise that is not sampling-error noise, but is not bias either. My own theory is that this is due to uneven "coverage" of each precinct. If voters tend to arrive in clumps (a group of Dem friends, a group of Rep friends) AND if the interviewer has breaks (to call in results), or there are busy times when the interviewer has to record more misses, there will be random fluctuations in the apparent "bias" that nonetheless average out to zero (in one precinct, a clump of Dem voters is missed; in another, a clump of Rep voters is missed). This will increase the variance - the noise - and reduce the power to detect real bias in either poll or count.
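The "clumps plus coverage gaps" idea above is easy to simulate. Every number below is invented (clump size 20, a 600-voter precinct, six covered stretches): voters arrive in same-party clumps, and an interviewer who covers contiguous stretches of the day produces far noisier precinct results than a true random sample of the same size - extra variance with zero net bias.

```python
import random
import statistics

random.seed(1)

def poll_error(patchy_coverage):
    # A precinct of 600 voters arriving in same-party clumps of 20.
    clumps = [random.choice("DR") for _ in range(30)]
    votes = [party for party in clumps for _ in range(20)]
    if patchy_coverage:
        # Uneven coverage: the interviewer catches six stretches of
        # 20 consecutive voters, with breaks in between.
        idx = []
        for _ in range(6):
            start = random.randrange(len(votes) - 20)
            idx.extend(range(start, start + 20))
        sample = [votes[i] for i in idx]
    else:
        sample = random.sample(votes, 120)  # same size, truly random
    return sample.count("D") / len(sample) - votes.count("D") / len(votes)

sd_patchy = statistics.stdev(poll_error(True) for _ in range(3000))
sd_random = statistics.stdev(poll_error(False) for _ in range(3000))
print(f"random sample sd:   {sd_random:.3f}")
print(f"patchy coverage sd: {sd_patchy:.3f}")  # noticeably larger noise
```

Both sampling schemes average out to zero discrepancy over many precincts, but the patchy-coverage scheme has several times the spread - precisely the kind of non-bias, non-sampling-error noise described above.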

None of this alters the fact that there was real bias in the 2004 exit poll. But the size of the study does not tell us where the bias lay. There is no sure way to tell. The best approach is to produce hypotheses about what would correlate with bias if it was a poll problem (e.g. interviewing rate, interviewer training) and what would correlate with bias if it was a vote-corruption problem (e.g. better results for Bush).

So far, my reading of the evidence is telling me that the poll hypotheses are doing better than the vote-count hypotheses, which is why I think the evidence better supports the case for a biased poll than the case for fraud.

But at the risk of being a bore: the fact that a discrepancy is "significant" doesn't tell you whether the bias was in the poll or the count.


The longer answer is that there may be patterns of discrepancy that might be informative, and the larger the study, the more power it would have to reveal these patterns.

*edit: this is simplified to the point of error! There are of course many tests we can use. Probably the best way of thinking about it, again, is in terms of a Margin of Error, aka Confidence Limits (CL). In a large study we can put tight confidence limits on our estimates of an effect. If we find a discrepancy of 4%, a large study may tell you that the "true" discrepancy is between 3.5% and 4.5%, while a small study may tell you it is between -1% and 9%. So the small study won't even tell you whether it is "significantly" different from zero.

However if you had two studies, and one got a discrepancy of 4%, plus or minus 1%, and another had a discrepancy of 0%, plus or minus 1%, you could say that the two studies were significantly different from each other. This might mean that the first included fraudulent precincts; or it might mean it was a biased study. Interestingly, the BYU exit poll in Utah was closer to the count than the E-M poll. I am not sure (without checking) whether either was outside its own MoE. I am pretty sure the BYU one wasn't. One could calculate whether the two polls were significantly different from each other. If they were, it might suggest that the problem was in the E-M poll. If not, it's just a lemon. There are a lot of lemons in the exit poll evidence.
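A rough way to formalise "significantly different from each other" for two independent estimates is a two-sample z-test on the difference. This sketch assumes each quoted MoE is a 95% margin (i.e. 1.96 x the standard error) and that the two studies are independent; the function name is mine.

```python
import math

def significantly_different(est1, moe1, est2, moe2):
    # Standard errors of independent estimates add in quadrature;
    # each MoE is assumed to be the 95% margin, i.e. 1.96 * SE.
    se_diff = math.sqrt((moe1 / 1.96) ** 2 + (moe2 / 1.96) ** 2)
    return abs(est1 - est2) > 1.96 * se_diff

# The example above: 4% +/- 1% vs 0% +/- 1% - the studies disagree.
print(significantly_different(0.04, 0.01, 0.00, 0.01))  # True

# 3% vs 4%, each +/- 2%: both "significantly" non-zero,
# yet not distinguishable from each other.
print(significantly_different(0.03, 0.02, 0.04, 0.02))  # False
```

The second call is the earlier 3%-vs-4% point in miniature: two discrepancies can each differ "significantly" from zero without differing significantly from each other.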
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 06:34 AM
Response to Reply #46
49. LOL
My short answer was way longer than my long answer.

The reason is that I didn't give you the long answer, although some of it got shunted into the short answer.

The really short answer is that inferential statistics is fraught with bear-traps.

Chi Donating Member (921 posts) Send PM | Profile | Ignore Fri Dec-16-05 10:11 AM
Response to Reply #46
52. Your short answer had me taking notes...heh, j/k
I wasn't looking to differentiate bias in the polls from bias in the count.
I was looking to differentiate poll/count bias from the other factors that contribute
to discrepancy, like...
Weather, number of precincts at location, appearance of interviewer, distance from exits, etc.

I got the impression that, as a rule(?), as sample increases, all contributors to the discrepancy decrease except for bias, which gets more accurate.
Therefore we could infer a trend line toward accuracy from a 10k sample, to a 50k sample, and
we could get a better handle on the cause....bias OR other.

A flat, or increasing error (poll to count) would indicate bias.
A decreasing error would indicate other, as (at least) an appreciable portion.
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 11:24 AM
Response to Reply #52
53. I really should have a rule
about posting before coffee...

Sampling error will certainly decrease as sample size grows. Let's assume we are talking about exit polls :)

As sampling error decreases, "real" factors associated with the discrepancy will become more apparent as the sampling "noise" shrinks. And what we would be looking at is all the factors you mention. One of the most common analytical techniques is multiple regression, in which we would put all the factors we think may have had some effect on the discrepancy between poll and count into a "model" equation and see which ones turn out to be "significant". The multiple regression equation has this form:

Discrepancy = b1*(bad weather) + b2*(number of precincts at location) + b3*(appearance of interviewer) + ... etc.

The technique finds the values for all those bs (called regression coefficients) that give the best fit to the data (leaves the smallest amount of leftovers). If a b is "insignificantly" different from zero, then we can say that that factor wasn't a very important contributor to the discrepancy. If a b value IS significantly different from zero, then we know that factor was important. So if b1 was, say, 2, we could say that all other things being equal, the discrepancy was two points higher in precincts with bad weather than in precincts without bad weather. And so on.
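As a sketch of how the fitting works, here is a bare-bones ordinary least squares in pure Python (solving the normal equations by Gaussian elimination). The precinct data and the effect sizes (a 2-point weather effect, a 0.5-point-per-unit distance effect) are invented for illustration only; a real analysis would also compute standard errors for each b to judge "significance".

```python
import random

random.seed(7)

def ols(X, y):
    # Ordinary least squares via the normal equations (X'X)b = X'y.
    # Each row of X already includes a leading 1 for the intercept.
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # Gaussian elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                          # back-substitution
    for r in reversed(range(k)):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

# Simulated precincts: discrepancy = 1 + 2*(bad weather) + 0.5*distance + noise
rows, ys = [], []
for _ in range(500):
    weather = random.randint(0, 1)
    distance = random.uniform(0, 10)
    rows.append([1.0, weather, distance])
    ys.append(1.0 + 2.0 * weather + 0.5 * distance + random.gauss(0, 1))

print([round(c, 2) for c in ols(rows, ys)])  # close to [1.0, 2.0, 0.5]
```

The recovered coefficients land near the values baked into the simulation, which is exactly the "best fit, smallest leftovers" idea described above.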

The greater your sample size, the more chance you will have that the b values will be "significant" i.e non-zero. Usually this is phrased as: the larger the sample the more statistical power you have to detect small effects. But what it really means is, the less the sampling "noise", the tighter will be the Confidence Limits (or MoE) of each b value.

So your statement that "as sample increases, all contributors to the discrepancy decrease except for bias, which gets more accurate" is sort of true. However, I tend to think of it the other way round - that as sampling error decreases, all other effects become more apparent. Or: the larger the sample size, the more statistical power we have to detect anything genuinely associated with the discrepancy.

Now, the problem is: we might know about things like weather, and interviewer's distance from exits, etc, but of course we don't know which precincts had fraud. So we can't put that in the equation. What we CAN do is to hypothesise where fraud would be most likely to show up. Say swing states. So we could add "b4 x precinct-in-swing-state" to the regression model. It wouldn't prove fraud - because there may be other reasons for bias being higher in swing states - but it would be interesting. Ditto with voting technology.

So the only real way to distinguish between fraud effects and poll effects is to see how much of the discrepancy can be "accounted for" in a regression model in which the factors seem likely to be polling factors and how much can be accounted for by "fraud" factors. And then try to interpret the results!

Hope this is not too garbled.




Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 05:01 PM
Response to Reply #52
55. After yet more coffee
I realise I probably misunderstood what you are saying.

The thing is, that we can't measure either "bias" or "fraud" directly. We don't know how the people who should have been in the poll, but weren't, voted - because they weren't in the poll. And we don't know how many votes were stolen where, because the fraudsters won't tell us. So what we have to do is theorise. For example, for poll bias:

We have a theory that interviewers were more likely to select willing-looking voters, and that Kerry voters were slightly more likely than Bush voters to look willing (or friendly). Now, the interviewers were supposed to sample every Nth voter, where N is a number between 1 and 10 allocated to each precinct. Small precincts would have a low N (to get a big enough sample) and large precincts might have an N of 10. Now it is easier to stick strictly to a low N than a high N, and easier in sparse precincts than crowded ones. If people are streaming out in droves, it would be easier to depart slightly from your Nth-voter protocol, especially if people were not very keen to respond - and pick the 11th or the 9th instead of the 10th, if the 11th or 9th looked more friendly. In a small precinct, where you are interviewing every 2nd or 3rd voter, it is much harder to deviate from the rule without being aware of it. Anyway - this is just a hypothesis - it may not have been true.
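That hypothesis can itself be simulated. In the sketch below every number is a made-up assumption, not data: Kerry voters "look friendly" 60% of the time versus 40% for Bush voters, N = 10, and "leeway" means an unfriendly-looking Nth voter may be swapped for a friendlier adjacent one. A strict every-Nth rule stays unbiased; a little leeway shifts the poll toward the friendlier-looking side.

```python
import random

random.seed(3)

def poll_shift(leeway, n_voters=200_000, n=10,
               p_kerry=0.50, friendly_kerry=0.60, friendly_bush=0.40):
    # Assumed friendliness rates (hypothetical, for illustration only).
    voters = []
    for _ in range(n_voters):
        is_kerry = random.random() < p_kerry
        friendly = random.random() < (friendly_kerry if is_kerry
                                      else friendly_bush)
        voters.append((is_kerry, friendly))
    picks = []
    for i in range(0, n_voters, n):
        pick = voters[i]                     # the designated nth voter
        if leeway and not pick[1]:
            for j in (i - 1, i + 1):         # peek at the neighbours
                if 0 <= j < n_voters and voters[j][1]:
                    pick = voters[j]         # take the friendlier face
                    break
        picks.append(pick[0])
    true_share = sum(k for k, _ in voters) / n_voters
    return sum(picks) / len(picks) - true_share

print(f"strict nth voter:     {poll_shift(False):+.4f}")  # near zero
print(f"with a little leeway: {poll_shift(True):+.4f}")   # pro-Kerry shift
```

The strict protocol produces only sampling noise; the leeway version produces a systematic pro-Kerry discrepancy even though every voter answered honestly - selection bias, not fraud, in this hypothetical.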

But to test the hypothesis we can see if bias was greater where N was larger, and it was. Bias was also greater where there were other factors that might make strict Nth voter protocol more likely to have been compromised - where the interviewer was stationed far from the exit; where the interviewer did not feel well trained, etc. So there is at least some evidence to support the theory that bias crept in where random sampling protocol was not strictly adhered to. But there might be another explanation.

Similarly, we might have a theory that fraud would be more likely in swing states; or in precincts with DREs; or in precincts where Bush's vote was higher than expected. And these can be tested too.

So it is not really true that the "non-bias" factors will get smaller with bigger samples; rather, the effect of anything on the discrepancy will become more apparent with bigger samples. However, the more of the bias we can "model", by including in the equations factors that really do affect it, the less noise we will have. And the bigger the sample, the more factors we are allowed to put in the model.

Multiple regression is a lovely way to analyse data. But it won't tell us anything unless we have good hypotheses.

Have a happy holiday!
kiwi_expat Donating Member (526 posts) Send PM | Profile | Ignore Sat Dec-17-05 03:28 AM
Response to Reply #55
57. Isn't it WPE that is greater when N is larger etc. ?
Edited on Sat Dec-17-05 03:47 AM by kiwi_expat
"But to test the hypothesis we can see if bias was greater where N was larger, and it was. Bias was also greater where there were other factors that might make strict Nth voter protocol more likely to have been compromised - where the interviewer was stationed far from the exit; were the voter did not feel well trained, etc." -Febble

Isn't it WPE that is greater when N is larger, etc.? Bias may or may not be larger in those circumstances. Right?


(Always keeping in mind that by "bias" we mean non-respondent bias and/or fraud.)

Cheers.

Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Dec-17-05 04:36 AM
Response to Reply #57
59. Sorry, was using the terminology
in chi's sense. I meant "bias" as in "non-response bias". Yes you are right. I was using the term "discrepancy" to stand in for WPE to avoid the issue of how you measure "discrepancy" - WPE is one way, but a poor way, as it underestimates the discrepancy in extreme precincts.

Thanks for the clarification!
kiwi_expat Donating Member (526 posts) Send PM | Profile | Ignore Sat Dec-17-05 09:38 AM
Response to Reply #59
62. On edit: I see that we agree (I think).
Edited on Sat Dec-17-05 10:21 AM by kiwi_expat
So we both agree with the statement:

"But to test the {non-response bias] hypothesis we can see if the DISCREPANCY was greater where N was larger, and it was. The DISCREPANCY was also greater where there were other factors that might make strict Nth voter protocol more likely to have been compromised - where the interviewer was stationed far from the exit; were the voter did not feel well trained, etc."


Or are you saying that you/Mitofsky have run separate analyses on the individual factors, and did indeed find that, for example, the farther away from the poll the interviewer was stationed, the more the results tended to favour KERRY??
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Dec-17-05 11:04 AM
Response to Reply #62
63. This is the tricky part
There is no reason to suppose that simple non-random sampling (i.e. departure from strict Nth voter protocol) would induce bias in a particular direction unless there was also an underlying tendency for willing-looking voters also to tend to be Kerry voters. Given the latter, then non-random selection might be expected to exacerbate the underlying bias.

So the hypothesis is something like: if Kerry voters were more likely to look like willing participants, then the more leeway the interviewer had for selecting willing-looking (as opposed to Nth) voters, the more pro-Kerry would be the poll (and the redder the shift in the vote). The E-M report tells us that WPE was more negative where various factors likely to introduce leeway were present. In this sense, the participation bias theory is supported (I'm not terribly happy with the term non-response bias - it is possible that there was non-response bias as well, in the strict sense of more Bush voters actually overtly refusing to participate than Kerry voters, but the factors noted in the E-M report suggest selection bias rather than non-response bias in the usual sense of the term - though there is a sense in which looking unwilling to participate can be thought of as a form of non-response bias).

So yes, your first statement looks right to me, except that I will be even more specific and say that the discrepancy in the direction of greater proportion of Kerry responses than Kerry votes was greater where yadda yadda....

Peace Patriot Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 05:58 PM
Response to Original message
10. Recently, four Ohio ELECTION REFORM initiatives that were predicted
Edited on Wed Dec-14-05 06:08 PM by Peace Patriot
to win by 60/40 votes were flipped over on election day to 60/40 LOSSES!--the most audacious flipover yet. The machines and their masters are now dictating election policy and preventing reform--a chilling Orwellian twist. Ohio may be a special case of Republican corruption and tyranny, but the methods of massive vote-switching that they are testing out there are undoubtedly intended for wider use. (The 2004 presidential flipover was a more modest 3% to 5%, at least as far as we can tell, with the exit polls as a guide. Anecdotal evidence on enthusiasm for Kerry-Edwards points higher; as would inclusion of all purged minority voters--at least 1 million black voters, estimated by Greg Palast.) (Exit polls only count those who make it to the polling booth.)

See Bob Koehler's article about the Ohio initiatives:
http://www.tmsfeatures.com/tmsfeatures/subcategory.jsp?custid=67&catid=1824

How many impossible election "anomalies" do we have to see before we...

Throw Diebold and ES&S election theft machines into 'Boston Harbor'!

"TRADE SECRET," PROPRIETARY software and firmware 'tabulating' our votes; owned and controlled by Bushite corporations; with virtually no audit/recount capability; and with the war profiteering corporate news monopolies DOCTORING their exit polls (Kerry won) to "fit" the results of Diebold's and ES&S's secret formulae (Bush won).

You really don't need ANY statistics to determine that the 2004 election was INVALID on its face. But WITH the statistics, including these by TIA, and WITH the many impossible "anomalies" that have been documented--including several studies showing impossible skews to Bush in electronic vs. paper voting--and with all the other mountain of evidence (new voter registration favoring Dems 60/40, new voters, independent voters and Nader voters all favoring Kerry by big majorities, flatlined pre-election approval polls for Bush, sinking like the Titanic just afterward, issue poll after issue poll showing huge fundamental disapproval by Americans of every Bush policy (Iraq war, torture, the deficit, you name it--going back two years), etc., etc., etc., etc., what we have here, in the 2004 election, is a CRIME of the first magnitude, that the war profiteering corporate news monopolies are covering up and black holing because they colluded in it, not only by falsifying their exit poll data, but also by failing to warn the American people about the vast insecurity and hackability of these electronic voting systems, now documented by the Government Accountability Office.

The GAO warns that the next election will be no more secure--and, at the pace that Diebold and ES&S are peddling their crappy, insecure, hackable wares around to the states, it appears to me that it will be EVEN LESS SECURE.

Edison-Mitofsky (the news monopolies' exit pollster), for their part, have promised that we will never again get to see their real exit poll results. They are going to make sure of that.

The situation is very bad. Steps we can take:

1. Demand that the DNC fund INDEPENDENT EXIT POLLS. Write to Dean. This verification tool is desperately needed. We have NO verification tools, and will be in a worse spot than we are now, since the news monopolies are intent on withholding the real results of theirs.

2. Support, donate to: www.UScountvotes.org - a project for statistical monitoring and challenges of elections in '06 and '08.

3. Investigate and organize other verification tools, such as "parallel elections."

4. Educate the public. They will not be disheartened by it, they will be GREATLY RELIEVED to know WHY their vote seems always to be wasted, and why the will of the majority--on the war, for instance--is not being done. Tell them to vote anyway. Never, never, NEVER give up on your right to vote. *NEVER!* Big turnouts CAN conceivably overcome the fraud. Depends on the circumstances. (Kerry would have overcome the fraud if he'd been antiwar, in my opinion.) Suggest citizen tasks that need to be done to get rid of these machines. KNOWING that it's rigged REDUCES demoralization, depression and disempowerment, when the election is stolen--not the other way around. It doesn't hurt; it helps. And truth is all! --if we believe in democracy.

5. Rush Holt's bill HR 550 will stop the corporate privatization of our elections in its tracks, and reverse it, by, among other things, prohibiting the use of undisclosed software in 2006. It has 169 co-sponsors (mostly Dems). Sign the petition. http://www.rushholt.com/petition.html

6. Info: www.votersunite.org, www.verifiedvoting.org. And the GAO report...
Access to pdf: http://www.gao.gov/docsearch/abstract.php?rptno=GAO-05-956
Text only: http://www.gao.gov/htext/d05956.html

Thanks to TIA and Autorank for your brilliant work!



autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 06:08 PM
Response to Reply #10
11. The Columbus Dispatch Poll--Excellent Post Peace Patriot
http://www.editorandpublisher.com/eandp/news/article_display.jsp?vnu_content_id=1001571578
This poll is so good there are scholarly articles written about it. I don't have time to find it now but will later and I'll post one of the links.

They were judged the most accurate newspaper poll based on 32 predictions.

Now they goofed! Oh sure, and there are always the "enablers" out there that jump through hoops of fiery B.S. to make the point. Wonder why?

The Editor & Publisher link above is a good place to start. Needless to say, fraud is clearly abundant.

And then there's this on Diebold, which was installed in Ohio just before the special election, and this is just one of the many problems Diebold has.

http://www.democraticunderground.com/discuss/duboard.php?az=view_all&address=132x2310273

But hey, people can "think" what they want and it is supposedly a "free" country unless you're one of the victims of the * Administration and you have to pay the rent.

Great Post!!!
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 06:51 AM
Response to Reply #11
28. here's a poli sci analysis of 2005: "Fraud in Ohio? Doubtful."
http://polysigh.blogspot.com/2005/11/fraud-in-ohio-doubtful.html

The author, Philip Klinkner, has been rightly praised on this very board for his analysis of racial inequities in "residual voting."

For folks who are actually interested in the Columbus Dispatch poll, as distinct from indiscriminate statements that it's a really good one (which is true, as far as it goes), Mark Blumenthal again has set the standard for others to match: http://www.mysterypollster.com/main/2005/11/columbus_dispat.html

What has been lacking so far, as far as I can tell, is any serious and sustained effort by the folks who Just Know there was fraud to rebut arguments like these. I have nothing against Koehler, but he shows no particular sign of even understanding the arguments against Ohio 2005 fraud, much less being able to rebut them.
anaxarchos Donating Member (963 posts) Send PM | Profile | Ignore Thu Dec-15-05 01:31 PM
Response to Reply #28
33. I'm the one who brought up Klinker...

...many times. And, I think highly of him.

I also tend to agree with him on Ohio in 2005.

Do you agree with him on 2000 and 2004?

Turnabout is fair play.

OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 02:24 PM
Response to Reply #33
34. yes, that's fair -- and first of all, his name is Klinkner
I'm not aware of disagreeing with any of Klinkner's conclusions about 2000 or 2004, but I'm not sure what you have in mind.

He found that spoilage disproportionately affected racial minority voters in 2000 as in other elections, yes? I have no reason to doubt that.

I think with respect to 2004, he also argued that there wasn't a huge turnout surge among evangelicals -- perhaps none at all. I'm fine with that. The whole X-million-evangelical thing has never resonated with me.

Ah, and here is a post called Kerry Didn't Win. I agree with that, too. Dunno if he has changed his mind since.
anaxarchos Donating Member (963 posts) Send PM | Profile | Ignore Thu Dec-15-05 05:02 PM
Response to Reply #34
36. Yes, his name is Klinkner...

But if you google on Philip Klinker, you will find 4 times as many references to him (same guy) as under Philip Klinkner... including many academic references. He is perpetually doomed to have his name mangled and I plead guilty (even though I know better). I haven't checked to see if you are a.k.a. Mark Limburger.

In 2000 he "found" something much more important than what you report. That "spoilage disproportionately affected racial minority voters" has been known a LOT longer than that (as you should know since you apparently follow my references). He noted what appeared to be INTENTIONAL disenfranchisement in Florida (i.e. "fraud") and called for an investigation (which never happened).

In 2004 he argued for spoilage among Hispanic ballots and against "turnout surge among evangelicals". Those are both key points and have both come up in our "discussions".

As far as what he thinks personally? Who knows... As I have already said, I am less inclined to buy into Ohio fraud in 2005 than in 2004, simply because the stakes weren't the same in this off-year. Can you talk me out of that position? Sure... with enough evidence. Am I useful as a source for no fraud in Ohio in 2005 (as in "Here's someone who used to believe in fraud but now doesn't...")? Shit no...

And as far as "Kerry Won", check the date...
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:45 PM
Response to Reply #36
37. well, umm...
Edited on Thu Dec-15-05 05:56 PM by OnTheOtherHand
Are we talking about "Whose Votes Don't Count?"? (Uh, no, anax, I don't follow your references especially closely.)

Yes, first he finds that spoilage rates are associated with % black even controlling inter alia for education, literacy, and income. He further finds that spoilage rates are greatest in Republican counties (EDIT: I stupidly wrote "precincts") with high proportions of blacks, where he notes that the incentives to disenfranchise blacks are presumably highest. It's not open-and-shut, but it is very provocative. (And in any debate involving John Lott, it is prudent to bet against Lott.)

OK, I will take a further look at Hispanic spoilage. Especially in NM, or also in FL, or nationally? I haven't come across his work on this yet, or if I did, I've blanked it out. I do not consider the evangelical argument key, although I acknowledge that you do.

I did check the date on "Kerry Didn't Win," which is why I left open the possibility that he had changed his mind. And if he hasn't, he might yet.
anaxarchos Donating Member (963 posts) Send PM | Profile | Ignore Fri Dec-16-05 01:44 AM
Response to Reply #37
44. well, umm...

...You might follow the references "more closely". You might explore what happened to Klinker's research (there I go again, Klinkner). Did it gain "acceptance"?

As far as "Kerry Didn't Win" goes, it is a three paragraph reminder to Palast, two days after the election, that there were not enough residual votes in Ohio to account for the margin. Despite the unfortunate title, that is not what the snippet is about.

As far as the Evangelical argument goes, OK... it's not "key" to your "argument". What is? Who voted for Bush?

We have a close 2000 election and a close 2004 election. In 2004, we get a huge turnout. Should be a squeaker. Kerry gets new voters. Kerry gets Nader voters. Kerry has 1 million "residual votes", "lost" in 2000, suddenly counted in 2004. Bush does not get Rove's three million evangelicals who sat out 2000 (not "key" to your argument). Bush wins by 3 million votes. Man, we're approaching nearly 10 million votes that are not "key" to your argument....

OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 07:31 AM
Response to Reply #44
50. I'll upgrade that to "grrrr"
I have no idea what your point is about Klinkner, yet, but this further quotation might help:

"It is really incremental movement," he (Klinkner) said of Mr. Bush's re-election. "The correlation between the vote in 2000 and 2004 was about as high as any pair of elections since the late 19th century. Essentially, Bush did 3 percentage points better this time, and he did so everywhere."

http://polysigh.blogspot.com/2004/11/polysighers-in-news.html

It's quite true that the "Kerry Didn't Win" post didn't address itself to rebutting whatever decisive-fraud arguments you may or may not think it should have addressed itself to rebutting. But it seems safe to infer that he didn't believe those, either.

In 2004, Bush had the advantage of incumbency. The pre-election polls showed a close race, with high turnout, and on average Bush slightly ahead. I think you have fundamentally confused yourself that one can say '2000 was close, 2004 was close, ergo 2004 was a replay of 2000, ergo we can assume that new voters determined the outcome.'
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 01:53 AM
Response to Reply #36
45. Hispanic Ballot Spoilage Chapter; Ohio 2005 was a "clunker."
Edited on Fri Dec-16-05 01:57 AM by autorank
This article argues for Hispanic spoilage, and demonstrates it quite clearly, but buries it in quasi-scientific statistical analysis at its most boring. Quite a conclusion.

Ohio 2005 occurred twice. First, in the Ohio 2nd District special election producing the lovely Jean Schmidt. There were major questions about this election and magical special effects. Land Shark and I had a nice exchange on the subject, where we concluded that the obvious need for a recount here was the responsibility of the citizens, not the politicians, since our interests are consistent and theirs are more difficult to fathom.

As for the other Ohio 2005, I'll place my money on the Dispatch Poll for two reasons. First, it has a great track record, one that is documented and discussed at some length. Second, because the second magical event in Ohio in 2005 occurred JUST AFTER Secretary of State Blackwell installed Diebold machines in half of the Ohio counties, accounting for a majority of the votes. Let's see: Diebold stinks to high heaven six ways from Sunday, a prestigious poll predicts a slam dunk for at least two of the issues, and there is an almost exact reversal of the poll numbers in the vote on those two issues. This is not the type of stuff amenable to academic debate, particularly when academics like those cited totally cave in the face of an amazing reversal. Nothing much happened in the last 9 days of the campaign, nothing except the installation of the Diebolds.

It's a rigged game. The Ohio special election was the right wing stepping out, overreaching in an experiment to see just how much shit the American public would eat. Apparently a great deal.

We're screwed period unless someone gets right in Diebold's face. It's starting in California. Let's hope it spreads.

CA Activists Call Cops on Diebold!
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 03:42 AM
Response to Reply #45
47. I'd keep the Diebold argument
Edited on Fri Dec-16-05 04:06 AM by Febble
separate from the spoilage argument.

Is there any evidence that spoilage of Hispanic or African American votes is higher on Diebold machines? I think this problem predates, and sadly will post-date, Diebold.

In NM, the problem appears to be worse on push-button than on touch-screen DREs.¹ And punchcards and levers aren't in the clear either.²

Edited to give references:

1. http://uscountvotes.org/ucvAnalysis/NM/NMAnalysis_EL_JM.pdf

2. http://www.civilrightsproject.harvard.edu/research/electoral_reform/ResidualBallot.pdf
by Edley, Klinkner, Benson and Weaver, which cites a study by the Government Reform Committee of the U.S. House of Representatives that "found a disparity of 5.7 percentage points when voters used punch-card machines, which fell to 3.6% with lever voting, 1.6% for electronic voting, and 0.6% for precinct-counted Optiscan voting".
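To put the quoted disparities in concrete terms, here is a sketch applying them to a hypothetical precinct of 1,000 voters (the precinct size is a made-up round number; the disparity figures are the ones quoted above):

```python
# Disparity in residual-vote rate (percentage points) between high- and
# low-minority precincts, as quoted above from the Edley et al. report.
disparity_points = {
    "punch card": 5.7,
    "lever": 3.6,
    "electronic (DRE)": 1.6,
    "precinct-count optical scan": 0.6,
}

precinct_size = 1000  # hypothetical precinct; made-up round number

for machine, points in disparity_points.items():
    extra_lost = precinct_size * points / 100
    print(f"{machine:<28} ~{extra_lost:.0f} extra lost ballots "
          f"per {precinct_size} voters")
```

On these assumptions, switching a high-minority punchcard precinct to precinct-counted optical scan would save on the order of 50 ballots per thousand voters.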
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 10:10 AM
Response to Reply #47
51. The article talks about punchcards. Diebold was a separate topic.
The article reference has to do with the spoilage. It's worth reading.

My broader point was clearly separate: the relationship between a voting machine manufacturer, and a nasty one at that, and consistent anomalies in Ohio. No connection was made between Diebold and the article.

The problems in NM are finally unfolding as a result of the lawsuit there. The problems are across the board. A test by David Dill of Verified Voting showed that selecting straight party and then selecting party members on DREs resulted in cancelling out votes. Let's see what happens down there with real evidence.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 11:38 AM
Response to Reply #51
54. Yes, thanks, I see that.
I've just got a bit of a bee in my bonnet about the issue of residual votes, as there is a mass of evidence suggesting that these are higher in precincts where punchcards are used and where there is a large number of African American voters - and are thus likely to be an important loss of votes for the Democratic candidate, quite apart from being a straight Civil Rights issue.

And apart from the NM case, where it looks as though a specific (older, presumably) type of DRE was associated with undervotes in precincts serving predominantly Hispanic or African American communities, the evidence seems to be that DREs have a rather better record. Which is why I bang on about keeping the security issue separate from the residual vote issue. I think DREs are an appalling way to vote, but the one thing in their favour is that they may keep residuals down. On the other hand, rationing them in Franklin County, Ohio, kept Kerry's vote down.

If you can't do it our way (pencil and paper and handcounts), then I'd go for optical scanners with open-source software and mandatory random audits. Cheap and checkable.

And in any case, it is clearly completely unacceptable for any voting machine manufacturer to have any links with any political party. To any Brit it's literally unbelievable (I mean my friends don't believe me.)
Printer Friendly | Permalink |  | Top
 
OnTheOtherHand Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 06:57 AM
Response to Reply #10
29. just for the record, on future exit poll results --
"Edison-Mitofsky (the news monopolies' exit pollster), for their part, have promised that we will never again get to see their real exit poll results. They are going to make sure of that."

Do you have any evidence for this assertion?

As far as I know, what E-M actually has said is this (from p. 17 of their January evaluation report):

"The decision by the NEP members to withhold the distribution of exit poll information within their organizations until 6PM ET on election day will help prevent, or at least delay, the use of exit poll data before poll closing by those who have not purchased the data." (Emphasis added)

That wouldn't prevent, for instance, cnn.com from posting the estimates as the polls close in each state, as it so memorably did in 2004.
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Wed Dec-14-05 11:49 PM
Response to Original message
17. UKRAINE Declares Exit Polls Invalid Retroactively--Yuschenko Out!
(Satire, sort of...;)

UPDATE, STUNNING REVERSAL IN THE UKRAINE
Exit Polls Declared Invalid, Commie Stooge Poisoner
to be Re-Installed as Faux President.
Masses Weep!



Kiev, Ukraine. 4/1/06. Now that "exit polls" have been thoroughly discredited in some corners, there's a move to kick out the "Orange Revolution" winner Yushchenko and reinstate the odious Yanukovich, widely suspected of poisoning the popular democrat Yushchenko. Yes, it's all "true." Exit polls have no validity unless they match up with the final vote. After all, the American pollster Mitofsky set a precedent: his final exit poll, just hours after the polls closed, showed Kerry winning by 3%, but 12 hours later, by adding in a factor for the final vote count, the margin was reversed to show a Bush victory.

The victorious Yanukovich said he wanted a "humble" foreign policy and more Russians (a minority ethnic group in the Ukraine) in his cabinet. He also banned any articles like the following and any further instruction in applied mathematics in Ukrainian schools.

TYPICAL OF BANNED AND PURGED ARTICLES:
Published on Sunday, December 26, 2004 by Reuters

Exit Polls: Liberal Yushchenko Wins Ukraine Election


http://www.commondreams.org/headlines04/1226-06.htm

KIEV, Ukraine - Exit polls in the re-run of Ukraine's presidential election Sunday said liberal challenger Viktor Yushchenko had beaten Prime Minister Viktor Yanukovich by a wide margin.

Yushchenko, who called crowds of supporters into the streets to denounce cheating in the last poll, scored 56.5 percent to 41.3 percent for Yanukovich, according to a poll by the Kiev International Institute for Sociology and the Razumkov Center.

<snip>

Yanukovich had initially been declared the winner in last month's run-off vote, but his victory was overturned by the Supreme Court which agreed with opposition charges that the election was rigged in his favor.

The figures in the first exit poll represented 80 percent of a sample of 30,000 voters who were polled across the former Soviet republic. The second exit poll surveyed 13,000 voters.
----------------------------

WHO WILL BE AMERICA'S MINISTER OF TRUTH AND FINAL JUDGMENT?
Printer Friendly | Permalink |  | Top
 
foo_bar Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 01:55 AM
Response to Reply #17
18. neocons funded the exit polls you cite
A key part of the media game has been the claim that Yushchenko won according to "exit polls". What is not said is that the people doing these "exit polls" as voters left voting places were US-trained and paid by an entity known as Freedom House, a neo-conservative operation in Washington. Freedom House trained some 1,000 poll observers, who loudly declared an 11-point lead for Yushchenko. Those claims triggered the mass marches claiming fraud. The current head of Freedom House is former CIA director and outspoken neo-conservative, Admiral James Woolsey, who calls the Bush administration's "war on terror" "World War IV". On the Freedom House board sits none other than Brzezinski. This would hardly seem to be an impartial human-rights organization.

http://www.atimes.com/atimes/Central_Asia/GA20Ag01.html

Western-funded exit polls showed Yushchenko was gonna win; Russia-funded ones showed Yanukovych was gonna win. We only heard about the former, though.

http://www.indymedia.org.uk/en/regions/liverpool/2004/12/302679.html

More provocatively, the US and other western embassies paid for exit polls, prompting Russia to do likewise, though apparently to a lesser extent. The US's own election this month showed how wrong exit polls can be. But they provide a powerful mobilising effect, making it easier to persuade people to mount civil disobedience or seize public buildings on the grounds the election must have been stolen if the official results diverge.

http://www.guardian.co.uk/ukraine/story/0,15569,1360297,00.html

Pro-Yanukovich television stations used Russian exit poll results to vouch support for him as outcries against falsification increased and the CEC stalled in delivering official results. In addition, the exit poll did not really exist in Ukrainian political culture before 2004. Voters, who already believed the vlada would falsify the election months in advance, had a new tool at their disposal to reconfirm their suspicions. Steele posits that the exit polls, funded by the US and Russia, influenced the voters to take to the streets.

http://leopolis.blogspot.com/2004_12_01_leopolis_archive.html

Strange bedfellows, eh?
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 02:49 AM
Response to Original message
19. Exit polls confirm election outcomes--to say otherwise is to accept fraud.
And there was fraud in 2000 and 2004. Everybody who has reviewed the evidence knows that. I don't need to cite anything here because it's all here in other posts, elsewhere on numerous sites, and hidden away in the Diebold, etc. corporate offices and the database of the 2004 Exit Polls, sequestered in a profound act of hypocrisy by the network consortium which "owns" the Exit Poll results.

The Exit Poll debate is ultimately a rigged game by E-M and the network/newspaper consortium that sponsored the polls. The trickery is on the order of "Three-Card Monte." Exit polls are conducted on election day, with a full paper trail, by an organization that has gained increasing respect for accuracy in matching presidential outcomes. The summation of the polls shows a Kerry win. The data is captured and available for all to see despite "embargoes" on use in broadcast and print. Controversy ensues over the 2004 election based on clear signs of voter suppression and irregularities in precinct-level, county-level, and state-level results. One means of checking the actual vote counting, which is in question, is through exit polls. A requirement for checking the exit polling is a look at the primary data. Rep. Conyers writes Mitofsky and says, give us a look. He's told no. Conyers asks again and he's referred to the NBC, CBS, Fox, ABC, PentaPost media consortium. They own the data, he's led to believe. He asks them for the data. Does he get it? No. Some reason or other, some claim of confidentiality, etc., which could be handled easily. It happens all the time in other professions and no one's hurt. But the answer continues to be no.

What could this mean? Several things. 1) The media consortium won't cooperate on one of the biggest questions in our modern history by providing unique information that would help validate or invalidate claims of fraud. Why does the CM hate democracy? 2) The media consortium won't cover the story of (a) the likelihood of election fraud based on the exit polls THEY CONDUCTED because (b) the final validation of the polls is stalled by THEIR REFUSAL to release the raw data for inspection and evaluation. That's what we call bait-and-switch. The Fox, NBC, ABC, CBS, PentaPost consortium fails to cover a story about itself which implicates the consortium in the theft of Election 2004, and it uses questions about the validity of exit polls as an excuse WHEN THOSE VERY QUESTIONS EXIST BECAUSE THE CONSORTIUM WON'T ALLOW EXAMINATION AND EVALUATION OF THE PRIMARY DATA.

It's a nice, neat little package.

This also enables the critics of the election poll evidence to tap dance to the enabling tune of Bush Daddy supporters. Of all the evidence of election fraud, the exit polls are the unifying force, the only national evidence, the slam dunk indicating fraud, the one vote on election day with a COMPLETE PAPER TRAIL. This evidence must be crushed at all costs.

But it can't be crushed. There is already enough evidence out to show the strengths of the State Exit Polls and the first four National Exit Polls. We're too far into the evidence for the game to be truly rigged. It's all a rear-guard action by the network consortium, necessary by their logic. After all, to allow further validation of the 2004 exit polls would place Bush's legitimacy in real question. What would that do to GE defense sales (NBC)? How would it impact Sumner Redstone's grand media plans (CBS)? Would the Washington Post company have regulatory problems with its various corporate enterprises if such a story broke? The CORPORATE MEDIA (CM) may be short-sighted about profitability, but it's got long-term designs on corporate positioning. They advance their own version of progressive politics: socialism for the rich, free enterprise for the poor (and they must love the poor, they try so hard to expand that group through the betrayal of American workers!).

An intellectually honest position for all involved would be to demand that the data be released universally, not just to "favorites," and demand that that be done immediately. On this, I am sure, we can all agree.

As one of my favorite DU posters, understandlinglife, says,

NAMASTE

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 04:55 AM
Response to Reply #19
21. Honestly, auto
Edited on Thu Dec-15-05 05:28 AM by Febble
this is bunk I don't find this very persuasive.

You say:

Of all the evidence of election fraud, the exit polls are the unifying force, the only national evidence, the slam dunk indicating fraud

Frankly, if this is "the only national evidence" then you don't have a case.

Exit polls DO NOT confirm election outcomes. They can be useful, they can indicate that there may be a problem. Maybe a serious problem.

But they "confirm" absolutely nothing.

The reason you do have a case worth making, and the reason why I believe that the attention given to the exit poll discrepancy was actually useful, is that it helped mobilize people to find hard evidence that does indeed suggest that Bush's victory was unfairly won. The Conyers' report contains much better evidence than anything in the exit polls, particularly of systematic disenfranchisement of minorities. The challenge of the Electoral College votes for Ohio was a triumph. I wept tears of gratitude that this time, the voices of the disenfranchised would actually be heard.

But the exit poll story has acquired a life of its own, a life, I might say, "beyond the grave". The exit polls may have a paper trail (and the raw data can be downloaded by you, now, if you want). But it remains a survey of 0.1% of voters (i.e. one in a thousand). Not a lot of paper.
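For scale: under the idealized assumption that the poll is a simple random sample, even a one-in-a-thousand sample has a tiny sampling margin of error, which is exactly why the argument turns on non-sampling error rather than sample size. A sketch, where the turnout figure is an assumed round number, not one from the thread:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed round numbers: ~122 million voters in 2004, with the exit poll
# sampling roughly one in a thousand of them.
voters = 122_000_000
n = voters // 1000          # ~122,000 respondents
moe = margin_of_error(0.5, n)
print(f"sample size: {n:,}")
print(f"95% margin of error: +/-{moe * 100:.2f} points")
```

Under these assumptions the pure sampling margin of error is under a third of a percentage point; any discrepancy much larger than that has to come from non-sampling sources (bias, clustering, miscount) rather than the sample being "only" 0.1% of voters.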

To say that the exit poll evidence does not confirm fraud is NOT to accept fraud. It is simply to accept the limitations of survey evidence. You need more. You've got more.

(Edited after a shot of virtual egg-nog)
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:05 AM
Response to Reply #21
23. I say "NAMASTE" and you say "this is bunk."
What's happened to the "Festivus" spirit around here?

The Exit Poll was superior to the election in 2004, period.

Good luck on your examination or whatever.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 05:23 AM
Response to Reply #23
25. Sorry, I was feeling a bit cross
Edited on Thu Dec-15-05 05:35 AM by Febble
Well, you can assert that the exit poll was superior to the election if you want. Frankly, I think they were both pretty appalling, from the long lines in Ohio to the response rates in the poll.

I'll delete "bunk" if the post is still open to edit.

But in the Festival Spirit I will state that:

Those of us who post on this forum are united in our belief that American democracy needs radical reform. We are only divided by our degrees of skepticism over what surveys can tell us about what happened in 2004.

Peace. And have a happy holiday.


(edited for grammar)
Printer Friendly | Permalink |  | Top
 
sunshinekathy Donating Member (177 posts) Send PM | Profile | Ignore Thu Dec-15-05 05:02 AM
Response to Original message
22. Excellent, Clear Explanation - Thank you.
Edited on Thu Dec-15-05 05:34 AM by sunshinekathy
The pattern of exit poll discrepancies can be read because NEDA has derived the algebra to determine what discrepancy patterns vote miscounts cause and what discrepancy patterns exit poll response bias causes.

The Ohio exit poll discrepancies clearly show a pattern that would be produced by outcome-changing vote miscounts, as will be evident once we release our Ohio precinct-level exit poll analysis soon.

Some people are still having trouble letting go of invalid illogical analyses to the contrary.

Of course it is correct that exit polls can only produce evidence to trigger on-the-ground investigation and vote recounts. Vote miscounts cannot be "proven" with exit poll data. However, if the pollsters would release more data publicly, as they should (including each precinct's voting machine type and vendor, and the polling conditions and factors), it would be possible to do more investigation to determine exactly what happened. And when things continue to look as suspicious as they have, the exit poll data should be released down to the exact precincts so that they can be investigated.

Or if counties would release the detailed vote counts, rather than conglomerating the counts to hide the evidence of vote miscounts and tampering, that would give us an indication of what is going on in every precinct and every vote type.

And if we could obtain funding to fully construct a public national election data archive, collect the data, and do the statistical programming, then this sort of analysis could be almost instantaneous after obtaining the data. And we have every legal right to the detailed vote count data, so this is one of the few doable solutions that might prevent the wrong candidates from being sworn into office following the next election, but only if development is begun ASAP.

Very nice clear explanation.

I will copy it and reread it and perhaps plagiarize a few sentences to include in our Ohio analysis, if I find it useful and if it is OK with you.
Printer Friendly | Permalink |  | Top
 
Name removed Donating Member (0 posts) Send PM | Profile | Ignore Thu Dec-15-05 10:37 AM
Response to Original message
30. Deleted message
Message removed by moderator. Click here to review the message board rules.
 
kansasblue Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 01:24 PM
Response to Original message
31. for the record, I've talked to my RW friends about election fraud and..

They are in a 'prove it to me so easily, with big pictures and graphs, and make it soooo easy everyone can understand, so I don't really have to think about it' mode. And when that doesn't happen, they are in an 'it's your fault and I don't have to concern myself about it' mode.

Anyway... anything that takes the process out of the complex-analysis mode of the great TIA (I luv ya, but your stuff is hard to read) is of great help.

Printer Friendly | Permalink |  | Top
 
kansasblue Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 01:29 PM
Response to Reply #31
32. (is it just me? I always think of TIA as John Kennedy) nt
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 06:44 PM
Response to Reply #31
39. Happy to oblige, but for the really smart ones, this post should help.
Edited on Thu Dec-15-05 06:45 PM by autorank
By really smart I mean those familiar with Excel, those with a serious grounding in math of any kind, and those willing to take a serious look at this particular text. It's quite accessible to the math-literate.

Now, for something really simple, here you go:

M.C. Miller's Harper's article:

http://www.harpers.org/ExcerptNoneDare.html

Happy Festivus!!! to you and all of Kansas. May you emerge from the Winter a lighter shade of Blue!


Printer Friendly | Permalink |  | Top
 
nicknameless Donating Member (1000+ posts) Send PM | Profile | Ignore Thu Dec-15-05 07:28 PM
Response to Original message
40. Another kick
And more thanks to TIA and autorank.

:kick:
Printer Friendly | Permalink |  | Top
 
FULL_METAL_HAT Donating Member (673 posts) Send PM | Profile | Ignore Fri Dec-16-05 01:12 AM
Response to Reply #40
43. I second that :^) n/t
Printer Friendly | Permalink |  | Top
 
autorank Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 01:05 AM
Response to Original message
42. autorank's Christmas Holiday Offer of Unity-Intellectual Honesty Spurned!
Edited on Fri Dec-16-05 01:06 AM by autorank
Previously, I said:

"An intellectually honest position for all involved would be to demand that the data be released universally, not just to "favorites," and demand that that be done immediately. On this, I am sure, we can all agree.

As one of my favorite DU posters, understandlinglife says,

NAMASTE"


"Unity" defined:

1. The combination or arrangement of parts into a whole; unification.
2. A combination or union thus formed.

==>I suggested that those who disagree on a subset of issues, the validity and usefulness of the Exit Polls, reach a state of "unity" on a common position, a "union" of concerned citizens.

"Intellectual Honesty" defined:

A Broad Definition of Intellectual Honesty: Honesty in the acquisition, analysis, and transmission of ideas. (University of California, Irvine Guidelines)

I suggested that a union of those who endorse and those who criticize the Exit Poll analysis be formed to insist that the raw data from the exit polls be released. I've suggested this before, but at this point I thought there would be a real chance to reach a "union." The drawbacks of such a release are nil. Confidentiality of subjects can be protected with the same procedures available in other disciplines. It's not hard, and we shouldn't have to hear that argument again.

The only argument against the broad release of the raw exit poll data, as requested by Rep. Conyers, rests on an intellectually dishonest position on "the acquisition, analysis, and transmission of ideas." In this case, the ideas in question are democracy, a free people, government through the rule of law, transparency in government, etc. The failure to endorse this open policy leaves us with only the "raw power" approach of Mitofsky and the Corporate Media consortium which owns the data: IT'S OUR DATA AND YOU CAN'T HAVE IT. TAKE A HIKE.

Surely we can all agree that the intellectually honest position is a united front insisting in unambiguous terms that the data be released.

I can assume that the vast majority of DU would favor this. How about the exit poll critics?

It's that time of year.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Fri Dec-16-05 06:01 AM
Response to Reply #42
48. Yes, it's that time of year
Edited on Fri Dec-16-05 06:03 AM by Febble
and I agree with you on a lot, auto.

Regarding data "release": I do think confidentiality is a serious issue, and that it is not simple to solve. For the ESI study it was solved using a fairly labour-intensive technique devised by one of the authors (Fritz Scheuren) in 1986, and the data was prepared, by Mitofsky and Dingman, according to Scheuren's specifications. I don't know whether the specifications were specific to the hypotheses tested in the study. I don't think so. But the point is that it was a collaborative project. The fact that it happened indicates that it could happen again, presumably if a group of academics proposed a similar project. But it was certainly a lot more complex than simply "releasing" data into the wild like marine mammals.

That said, I'd like to see other independent studies (my understanding is that the ESI analysis was independent, although the data preparation was necessarily done in-house by Mitofsky) that can be subjected to peer-review. I understand the ESI study is undergoing peer-review at present. I would support any move for such a study.

The trouble is, I doubt some people would believe the answers, if the data were prepared first by Mitofsky et al. ESI hasn't had an easy ride.

However, you can, yourself, download the "paper trail" - the database of responses to the questionnaires that were actually used in the cross-tabulations on election day (phoned in by each interviewer, reading from the questionnaires). They are here:

ftp://ftp.icpsr.umich.edu/pub/FastTrack/General_Election_Exit_Polls2004/

and have been since January. And if you believe that the only error in the polls is sampling error (i.e. that the responses are a true random sample), they are actually all you need. It gives every response, by state and demographic. You can see for yourself the extent to which the responses differed from the state counts. It's your paper trail.

I don't think it's much use, myself, but it's there. The actual raw data.

Have a very happy holiday! And I agree with you that vote spoilage was, and remains, a serious issue, and one that preferentially hurts Democratic voters. This and voter suppression need urgent fixing, quite apart from the issue of paperless voting, which, I agree, is also a serious one. As is the insecurity of the software.

In fact these wretched surveys are all we disagree on.

Peace and goodwill!

Lizzie

(edited for stupid error)


Printer Friendly | Permalink |  | Top
 
sunshinekathy Donating Member (177 posts) Send PM | Profile | Ignore Fri Dec-16-05 10:22 PM
Response to Reply #48
56. Informative, but Not Quite Accurate
Edited on Fri Dec-16-05 10:24 PM by sunshinekathy
The ESI study was not "independent," because Mitofsky was one of the listed authors. (And the ESI study never tried to explain the cause of the actual Ohio discrepancies; further, its entire analysis was later mathematically shown to rest on an invalid, illogical premise, and was thus meaningless for drawing any conclusions.)

and

the "database of responses to the questionnaires that were actually used in the cross-tabulations on election day" included on
ftp://ftp.icpsr.umich.edu/pub/FastTrack/General_Election_Exit_Polls2004

does not represent the full set of questionnaires.

Only a subset of the responses to the questionnaires is included there.

Otherwise, a very informative post.

Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Dec-17-05 05:15 AM
Response to Reply #56
60. Exactly
That's why I said that the ESI study was a collaborative effort. But my point was that any study using "blurred" data would inevitably be collaborative, even if, as I believe was the case with the ESI study, the analytical part of the study is independent. The blurring, clearly, has to be done at Mitofsky's end, which, presumably, is why Mitofsky and Dingman are credited as authors, preceded by the word "with".

But you are of course right. No study involving "blurred" data will be independent, because the blurring part will not be.

As for your second point, the data in the UMich database does include all the data used in the crosstabs, including age, race and sex. However, it is, as you say, a sub-sample of all responses (E-M evaluation, p 55). It appears that full responses to about 60% of the questionnaires were entered into the database on election day. The remainder of the responses were used only to give totals of the responses to the presidential question, as larger Ns were required for this question. Totals of responses to the presidential question, as well as full responses to a subsample of the questionnaires, were called in by the interviewers at each of the three calls on election day. It is the questionnaire response database that is deposited at UMich, which also includes questionnaire responses from absentee voters.
Printer Friendly | Permalink |  | Top
 
Febble Donating Member (1000+ posts) Send PM | Profile | Ignore Sat Dec-17-05 05:25 AM
Response to Reply #48
61. As Kathy points out
The UMich data is the subsample (looks like about 60%) of questionnaires that were used for the demographic crosstabs. As it is a random sub-sample, it should not be prone to anything other than sampling error (pieces of paper not being prone to non-response bias), so TIA's math should apply. In other words, it should have the same statistical properties as the whole, plus a little extra sampling variance.
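The "extra sampling variance" point can be illustrated with a toy simulation (all numbers made up: 1,000 questionnaires, a 52% share, a 60% random subsample). The subsample is unbiased on average; it just bounces around the full-set value a bit more:

```python
import random
import statistics

random.seed(7)

# Toy data: 1,000 questionnaires, 52% for one candidate (made-up numbers).
questionnaires = [1] * 520 + [0] * 480
full_share = statistics.mean(questionnaires)

# Take many random 60% subsamples and look at their candidate shares.
shares = []
for _ in range(500):
    sub = random.sample(questionnaires, int(len(questionnaires) * 0.6))
    shares.append(statistics.mean(sub))

print(f"full set:        {full_share:.3f}")
print(f"subsample mean:  {statistics.mean(shares):.3f}")   # same on average
print(f"subsample sd:    {statistics.stdev(shares):.4f}")  # extra variance only
```

The subsample shares scatter around the full-set share without drifting away from it, which is the sense in which random subsampling adds variance but no bias.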

Printer Friendly | Permalink |  | Top
 
Neil B Forzod Donating Member (64 posts) Send PM | Profile | Ignore Sat Dec-17-05 03:45 AM
Response to Original message
58. nay
This kid still cracks me up every time. Still doesn't know jack about statistics, but he remains awesome just based on sheer boneheaded stubbornness alone.

Sampling error decreases with sample size, but other types of error don't, and in fact tend to increase. Which is all I'll say on the matter, since TIA still doesn't know the difference between different error types anyway. :eyes:
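The distinction being drawn here can be sketched with a quick simulation (the true share and the bias are made-up numbers): as n grows, the spread of poll results shrinks roughly like 1/sqrt(n), but a constant non-sampling bias stays put.

```python
import random
import statistics

random.seed(1)

def poll(n, true_p=0.51, bias=0.02):
    """One simulated poll of n voters whose measured share is shifted by a
    constant non-sampling bias (e.g. differential response). Both true_p
    and bias are made-up illustrative numbers."""
    hits = sum(random.random() < true_p + bias for _ in range(n))
    return hits / n

sds, biases = {}, {}
for n in (500, 5_000, 50_000):
    results = [poll(n) for _ in range(100)]
    sds[n] = statistics.stdev(results)           # sampling error: shrinks with n
    biases[n] = statistics.mean(results) - 0.51  # bias: does not shrink
    print(f"n={n:>6,}  sampling sd={sds[n]:.4f}  average error={biases[n]:+.4f}")
```

The sampling standard deviation falls by about a factor of ten as n goes from 500 to 50,000, while the average error stays near the built-in two-point bias regardless of sample size.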

Also, you can't really claim to be ignoring people while simultaneously writing ten-page diatribes in response to their posts.

Still... awesome. :D

Nay!

Neil
Printer Friendly | Permalink |  | Top
 

© 2001 - 2011 Democratic Underground, LLC